Human Values
11 pages tagged "Human Values"
What are "human values"?
Might an aligned superintelligence force people to change?
Isn’t it immoral to control AIs and impose our values on them?
Can we think of AIs as human-like?
If I only care about helping people alive today, does AI safety still matter?
Could we tell the AI to do what’s morally right?
Wouldn’t a superintelligence be smart enough to know right from wrong?
Why can’t we just use Asimov’s Three Laws of Robotics?
What is the orthogonality thesis?
What is "coherent extrapolated volition (CEV)"?
What is shard theory?